CN109616122A - A kind of visualization hearing aid - Google Patents

A kind of visualization hearing aid

Info

Publication number
CN109616122A
CN109616122A (application CN201811567610.4A)
Authority
CN
China
Prior art keywords
voice
text
processing software
voice signal
sound
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811567610.4A
Other languages
Chinese (zh)
Inventor
王让利
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201811567610.4A
Publication of CN109616122A
Current legal status: Pending

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/26 - Speech to text systems
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0272 - Voice signal separating
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/06 - Transformation of speech into a non-audible representation, e.g. speech visualisation or speech processing for tactile aids
    • G10L21/10 - Transforming into visible information
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 - Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 - Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 - Noise filtering
    • G10L2021/02082 - Noise filtering the noise being echo, reverberation of the speech

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Document Processing Apparatus (AREA)

Abstract

A visualized hearing aid intended to let hearing-impaired persons, by means of speech recognition and speech synthesis, see the recognized text of an interlocutor's speech together with the relative position of its sound source, understand the interlocutor's intention, and play back the speech of their own reply text, thereby completing spoken conversation with other people. The device comprises a sound collection component and a speech-text display component. The sound collection component is a wearable multi-microphone device that collects and extracts the speech of surrounding people together with the relative position of each sound source. The speech-text display component performs speech recognition on the collected audio, displays the recognized text, and highlights the text of the currently preferred interlocutor's voice. It also displays candidate reply texts and plays the speech of the reply text selected or edited by the user, so that the interlocutor can hear the spoken reply. The device thus helps hearing-impaired persons complete spoken conversation with an interlocutor more quickly and naturally.

Description

Visualized hearing aid
Technical field
A hearing aid processes the sound it captures by reducing noise and cancelling echo and then amplifies it for the user, so that a hearing-impaired person can perceive the sound.
Background art
Existing hearing aids cannot help deaf-mute persons who have no residual hearing at all.
Existing hearing aids perform poorly in noisy environments or when several people speak at the same time: they can neither fully suppress ambient noise nor shield the voices of unrelated bystanders, and they do not allow the user to lock onto the voice of one interlocutor through the hearing aid. This is a main shortcoming of existing hearing aids.
Existing hearing aids also do not allow the user to produce a spoken reply through the device. They therefore provide only one-way assistance and offer no further help to users with speech impairments who would need the device to speak a reply on their behalf.
Summary of the invention
According to an exemplary embodiment of the present inventive concept, the visualized hearing aid comprises: a sound collection component, which is a portable or wearable multi-microphone device that reduces noise in and cancels echo from the collected sound, identifies the voices of surrounding people, separates and extracts them into multiple independent voice signals, computes the distance and position of each voice's sound source relative to the sound collection component, and transmits each voice signal together with the position of its source relative to the sound collection component to the speech-text display component over a data link; and a speech-text display component, which comprises a main control circuit, a loudspeaker, a display screen and processing software. The display interface of the screen contains a first display area, a second display area and a third display area. The main control circuit and the processing software recognize the received multi-channel voice signals as text; the processing software displays the text of each voice signal in the first display area, positioned proportionally according to the relative position and distance of its sound source with respect to the sound collection component. A hearing-impaired person can thus see the text corresponding to the voices of surrounding people and understand what they intend to say.
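For illustration only, the per-channel payload described above can be sketched as a small data structure; the field names (channel_id, audio, azimuth_deg, distance_m, intensity_db) and the length-prefixed framing are assumptions made for this sketch and are not specified in the disclosure (Python):

from dataclasses import dataclass, asdict
import json

@dataclass
class VoiceChannel:
    """One separated voice signal plus the position of its source relative
    to the sound collection component (hypothetical field names)."""
    channel_id: int
    audio: bytes          # PCM frames of this speaker's separated voice
    azimuth_deg: float    # 0 = directly ahead of the carrier or wearer
    distance_m: float     # estimated distance from the sound collection component
    intensity_db: float   # sound intensity, used later for preferred-voice selection

def encode_for_data_link(channel: VoiceChannel) -> bytes:
    """Serialize one channel for transmission over the data link."""
    header = {k: v for k, v in asdict(channel).items() if k != "audio"}
    header_bytes = json.dumps(header).encode("utf-8")
    # length-prefixed header followed by the raw audio payload
    return len(header_bytes).to_bytes(4, "big") + header_bytes + channel.audio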
The present invention is designed to serve the daily life of persons with severe hearing loss. The sound collection component is a portable or wearable device: in everyday use the user carries or wears it, so the position of the sound collection component is the body position of the carrier or wearer, and the direction the component faces is the direction the carrier or wearer is facing.
The sound collection component may be a stand-alone portable or wearable device that sends the voice signals and the positions of their sources relative to itself to the speech-text display component over the data link, or it may be an accessory attached to the speech-text display component.
When the speech-text display component receives only one voice signal, the processing software treats that voice as the preferred voice, and the text recognized from it is shown, highlighted, in the second display area in the order in which the voice signal is received.
When the speech-text display component receives more than one voice signal, the processing software selects as the preferred voice the voice whose source is closest to the front of the carrier or wearer of the sound collection component, closest to the region directly in front of the carrier or wearer, and highest in sound intensity, and the text recognized from the preferred voice is shown, highlighted, in the second display area in the order in which its signal is received.
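A minimal sketch of this selection rule, reusing the hypothetical VoiceChannel fields above; the weighted score and its weights are illustrative assumptions, since the disclosure states the selection criteria but not a formula:

import math

def select_preferred(channels, w_front=1.0, w_dist=1.0, w_loud=0.5):
    """Pick the channel most likely to be the interlocutor: closest to the
    region directly ahead, closest in distance, highest in intensity."""
    def score(ch):
        frontal = math.cos(math.radians(ch.azimuth_deg))    # 1.0 when directly ahead
        proximity = 1.0 / (1.0 + ch.distance_m)              # larger when closer
        loudness = ch.intensity_db / 100.0                   # rough normalization
        return w_front * frontal + w_dist * proximity + w_loud * loudness
    return max(channels, key=score)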
When the position of the carrier or wearer of the sound collection component changes, or when the source positions or intensities of surrounding voices change, the processing software re-evaluates the voices according to the preferred-voice selection conditions above, selects the best-qualified voice as the new preferred voice, and shows the text recognized from it, highlighted, in the second display area in the order in which its signal is received.
The user may select any text item shown in the first display area; the processing software then treats the voice corresponding to the selected text as the preferred voice, and the text recognized from it is shown, highlighted, in the second display area in the order in which its signal is received. As long as that voice can still be collected, the processing software keeps it as the preferred voice even if the position of the sound collection component changes or another voice better satisfies the automatic selection conditions, until the user selects a different voice. This lets the user converse conveniently with an interlocutor: when body movement changes the position of the carried or worn sound collection component, when several people nearby speak at once, or when the interlocutor moves, the user's voice and the interlocutor's voice remain anchored, and only the text recognized from the interlocutor's voice continues to be shown in the second display area.
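The anchoring behaviour can be expressed as a small state holder that falls back to the automatic rule only when no voice is pinned; this is a sketch built on the assumed channel_id field and select_preferred function above, not the patented implementation:

class PreferredVoiceTracker:
    """Keeps the user-pinned voice preferred until the user re-selects
    or the timeout logic (next sketch) declares the voice gone."""
    def __init__(self):
        self.pinned_id = None          # channel_id chosen by the user, if any

    def pin(self, channel_id):
        self.pinned_id = channel_id    # user tapped a text item in the first display area

    def unpin(self):
        self.pinned_id = None          # e.g. after the dialogue timeout expires

    def current(self, channels):
        """Return the preferred channel for the current batch of channels."""
        if self.pinned_id is not None:
            for ch in channels:
                if ch.channel_id == self.pinned_id:
                    return ch          # the pinned voice wins while it is audible
            return None                # pinned voice silent this round: keep waiting
        # nothing pinned: fall back to the automatic selection rule
        return select_preferred(channels) if channels else None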
The processing software sets, and dynamically adjusts during the current conversation, a maximum single-turn dialogue duration based on the time the two parties have taken for their longest single exchange so far. If the sound collection component fails to collect the current preferred voice within the current maximum single-turn duration, the processing software concludes that the preferred voice has disappeared, reselects a new preferred voice according to the selection conditions above, and highlights the text recognized from the newly selected preferred voice in the second display area of the display screen.
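A sketch of the adaptive timeout; the disclosure does not give an update rule, so tracking the longest observed gap between turns of the preferred voice is an assumption made purely for illustration:

import time

class TurnTimeout:
    """Dynamically adjusted maximum single-turn dialogue duration."""
    def __init__(self, initial_s=10.0):
        self.max_turn_s = initial_s
        self.last_heard = time.monotonic()

    def on_preferred_voice_heard(self):
        now = time.monotonic()
        gap = now - self.last_heard
        self.max_turn_s = max(self.max_turn_s, gap)   # grow to the longest gap seen so far
        self.last_heard = now

    def preferred_voice_gone(self):
        """True when the preferred voice has been silent longer than the limit,
        which triggers reselection of a new preferred voice."""
        return time.monotonic() - self.last_heard > self.max_turn_s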
After a new preferred voice is reselected, or after the user selects a preferred voice, the processing software may first clear the previously accumulated text from the second display area.
After each new piece of text is added to the second display area, the processing software analyses it and displays, in the third display area, one or more best-matching candidate reply texts; a candidate reply is the text most commonly used in response to the interlocutor's current text and its preceding context. If the current text and context of the preferred voice are identical or semantically similar to earlier text and context in the conversation, the processing software places the reply the user previously selected, edited or entered in the top-ranked position. The user may directly select a candidate reply, select and edit a candidate reply, or enter a reply directly; the processing software synthesizes the selected, edited or entered reply text into speech and plays it through the loudspeaker. When playback finishes, the main control circuit issues a brief gentle vibration or a light flash to tell the user that the spoken reply is complete. In this way other people can hear the user's spoken reply, so that users who cannot speak or who have speech impairments can also express themselves by voice, which speeds up the exchange and keeps the conversation natural and convenient.
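A minimal sketch of the reply-suggestion step; the token-overlap similarity, the 0.6 threshold and the candidate store are illustrative assumptions, since the disclosure describes the behaviour but not an algorithm:

def suggest_replies(current_text, context, history, common_replies, top_k=3):
    """Rank candidate replies for the interlocutor's latest text.

    history: list of (text, context, reply_used_by_user) tuples from earlier turns
    common_replies: mapping from a prompt phrase to its usual replies
    """
    def overlap(a, b):
        wa, wb = set(a.split()), set(b.split())
        return len(wa & wb) / max(1, len(wa | wb))

    candidates = []
    # replies the user chose before, promoted when the situation repeats
    for past_text, past_ctx, past_reply in history:
        sim = 0.5 * overlap(current_text, past_text) + 0.5 * overlap(context, past_ctx)
        if sim >= 0.6:          # assumed threshold for "identical or semantically similar"
            sim += 1.0          # previously used reply takes the top-ranked position
        candidates.append((sim, past_reply))
    # generic most-common replies for similar prompts
    for prompt, replies in common_replies.items():
        sim = overlap(current_text, prompt)
        candidates.extend((sim, r) for r in replies)

    candidates.sort(key=lambda c: c[0], reverse=True)
    ranked, seen = [], set()
    for _, reply in candidates:
        if reply not in seen:
            seen.add(reply)
            ranked.append(reply)
    return ranked[:top_k]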
Beneficial effects
According to the present inventive concept, the visualized hearing aid lets a hearing-impaired person see, in the first display area, the text of the voices of surrounding speakers, so that when someone nearby wants to start a conversation, the hearing-impaired person has the opportunity to realize that someone needs to talk with them.
According to the present inventive concept, the visualized hearing aid lets a hearing-impaired person see the text of what the interlocutor says, which makes the conversation between them quicker, easier and more natural.
According to the present inventive concept, when the speech-text display component receives only one voice signal, the text recognized from that signal is automatically shown in the second display area. In a quiet environment where only one person nearby is speaking, the user can immediately see the text and the intention of the interlocutor's speech without making any selection or setting.
According to the present inventive concept, when the speech-text display component receives more than one voice signal, the processing software selects as the preferred voice the voice closest to the front of the carrier or wearer of the sound collection component, closest to the region directly ahead, and highest in intensity, and continuously shows the text recognized from that voice in the second display area. In an environment where several people are speaking, the natural position of the interlocutor relative to the user and the natural intensity of the interlocutor's voice make it most probable that the interlocutor's voice satisfies the preferred-voice selection conditions first. The user can therefore, in an environment where several people speak at once and without any explicit selection or setting, choose or re-choose the desired interlocutor simply by adjusting body position and orientation. This style of selection is natural and intuitive and lets the user converse naturally, conveniently and efficiently in multi-speaker environments.
According to the present inventive concept, once the user has selected as the preferred voice the voice corresponding to a text item shown in the first display area, the processing software keeps displaying that voice as the preferred voice even if the position of the sound collection component changes or the positions or intensities of surrounding voices change. Limb movement that shifts the sound collection component therefore neither forces a reselection nor prevents selection, so normal body movement during conversation is unaffected; the preferred voice is not reselected because the interlocutor moves within the conversation space, nor because an unrelated bystander speaks loudly or approaches the user. This anchors the interlocutor and their voice for the user and provides a convenient means of choosing an interlocutor in a noisy environment with competing voices.
According to the present inventive concept, the processing software continuously appends and highlights the text of the preferred voice in the second display area, so the user can at any time look back at the text of what the interlocutor said earlier and more easily grasp the interlocutor's overall intention.
According to the present inventive concept, the processing software sets, and dynamically adjusts during the conversation, the maximum single-turn dialogue duration based on the time taken by the longest single exchange so far; when the sound collection component fails to collect the preferred voice within this duration, the processing software reselects the preferred voice. After the current interlocutor ends the conversation, the device thus naturally reselects another person's voice as the preferred voice without the user having to make any further setting or selection.
According to the present inventive concept, each time the latest text of the preferred voice is shown in the second display area, the processing software analyses the text and its preceding context and shows one or more of the most common candidate replies in the third display area; the candidate replies are the most common responses to the interlocutor's text and its context. If the current text and context are identical or semantically similar to earlier text and context, the reply the user previously selected, edited or entered is placed in the top-ranked position, so the user can select or edit a reply faster and spend less time editing. The reply the user selects, edits or enters is synthesized into speech by the main control circuit and played through the loudspeaker; when playback finishes, the speech-text display component issues a brief gentle vibration or a light flash to indicate that the spoken reply has been played. This lets persons who cannot speak or who have speech impairments produce speech during a conversation, achieving a function similar to speaking; they no longer need to rely on inefficient means such as sign language or writing, and can hold spoken conversations with other people.
Detailed description of the invention
Figure 1 shows the connections and working positions of the components of the visualized hearing aid according to an exemplary embodiment of the present inventive concept.
Reference numerals: sound collection component 10, speech-text display component 20, loudspeaker 21, display screen 22, first display area 30, second display area 31, third display area 32, data link 40.
Specific embodiment
Various exemplary embodiments are described more fully below with reference to the accompanying drawing, in which some exemplary embodiments are shown. The present inventive concept may, however, be embodied in many different forms and should not be construed as limited to the exemplary embodiments set forth here.
The exemplary embodiments describe idealized forms of the present inventive concept; various modifications of the illustration are therefore possible. The exemplary embodiments are not limited to the drawing, and variations in the shapes of the components and in the connections between them are included.
Hereinafter, a visualized hearing aid according to an exemplary embodiment of the present inventive concept will be described with reference to Fig. 1.
As shown in Fig. 1, the visualized hearing aid includes a sound collection component 10, a portable or wearable multi-microphone device for collecting sound. It reduces noise in and cancels echo from the collected sound, identifies the different sound sources, separates out an independent voice signal for each person contained in the collected sound, computes the position of each voice's source relative to the sound collection component 10, and transmits each recognized voice signal together with the position of its source relative to the sound collection component 10 to the speech-text display component 20 over the data link 40.
The speech-text display component 20 is a portable electronic device comprising a loudspeaker 21, a display screen 22 and processing software. The display interface of the screen 22 contains a first display area 30, a second display area 31 and a third display area 32.
The main control circuit is located inside the speech-text display component. It receives, over the data link 40, one or more independent voice signals sent by the sound collection component 10 together with the positions of their sources relative to the sound collection component 10, and the processing software recognizes the text expressed in each voice. The processing software displays the recognized text at the position in the first display area 30 that corresponds, proportionally, to the relative position and distance of the corresponding voice's source with respect to the sound collection component 10. When several independent voices are displayed at the same time, longer texts are shown only in part or are scrolled, so that the text of as many voices as possible can be shown in the first display area 30 while reducing the screen area each voice occupies.
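The proportional placement can be sketched as a simple mapping from a source's azimuth and distance to screen coordinates; the screen size, maximum displayed distance and truncation length are assumed values, not taken from the disclosure:

import math

def place_in_first_area(channel, text, width_px=800, height_px=300,
                        max_distance_m=5.0, max_chars=20):
    """Map a voice source's (azimuth, distance) to a position in the first
    display area 30 and truncate long text (illustrative values only)."""
    # horizontal position from azimuth: directly ahead maps to the centre
    x = int(width_px / 2 + (width_px / 2) * math.sin(math.radians(channel.azimuth_deg)))
    # vertical position from distance: nearer sources are drawn nearer the bottom edge
    d = min(channel.distance_m, max_distance_m) / max_distance_m
    y = int(height_px * (1.0 - d))
    shown = text if len(text) <= max_chars else text[:max_chars - 1] + "…"
    return {"x": x, "y": y, "text": shown}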
When the main control circuit receives only one voice signal, the processing software treats that voice as the preferred voice; each time the main control circuit receives the voice signal, the processing software recognizes its text and shows the recognized text, highlighted, in the second display area 31 in the order in which the main control circuit receives the voice.
When the main control circuit receives more than one voice signal, the processing software selects as the preferred voice the voice whose source is closest to the front of the carrier or wearer of the sound collection component, closest to the region directly in front of the carrier or wearer, and highest in sound intensity; each time the main control circuit receives the preferred voice signal, the processing software recognizes its text and shows the text, highlighted, in the second display area 31 in the order in which the main control circuit receives the voice.
When the body position of the carrier or wearer of the sound collection component 10 changes, when the position or intensity of the interlocutor's voice changes, or when the positions or intensities of other surrounding speakers change, the processing software reselects as the preferred voice the voice that best satisfies the selection conditions above, and each time the preferred voice signal is received, the text recognized from it is shown, highlighted, in the second display area 31 in the order in which the main control circuit receives it.
The user may select the text of any displayed voice in the first display area 30, making that voice the preferred voice; each time the preferred voice is received, the processing software recognizes its text and shows it, highlighted, in the second display area 31 in the order in which the main control circuit receives the voice.
After the user has selected a preferred voice, that voice remains the preferred voice until the user selects another voice as the preferred voice.
After the user has selected a preferred voice, that voice remains the preferred voice until it disappears completely.
The processing software sets the maximum single-turn dialogue duration based on the time the two parties have taken for their longest single exchange in the current conversation, and dynamically adjusts that value. When the sound collection component 10 fails to collect the preferred voice within the maximum single-turn duration set by the system, the current spoken exchange is considered finished; the processing software reselects a new preferred voice according to the preferred-voice selection conditions above and highlights the text recognized from the newly selected preferred voice in the second display area of the display screen.
Each time the processing software adds new text to the second display area 31, it searches for the replies that best match that text and its preceding context and lists the candidate replies found in the third display area 32. If the current text and context of the preferred voice are identical or semantically similar to earlier text and context in the conversation, the reply the user previously selected, edited or entered is placed in the top-ranked position. The user may click to select or edit a candidate reply, or enter a new reply unrelated to the existing candidates; the processing software synthesizes the selected, edited or entered reply into speech, which the main control circuit plays through the loudspeaker 21. When playback finishes, the main control circuit issues a brief gentle vibration or a brief light flash to indicate that the reply has been played.
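The playback-and-confirm step can be sketched as follows; synthesize_speech, speaker, vibrate and flash stand in for the device's text-to-speech engine, loudspeaker 21 and vibration or flash hardware, none of which the disclosure specifies:

def play_reply(reply_text, synthesize_speech, speaker, vibrate, flash, use_vibration=True):
    """Synthesize the chosen reply, play it, then signal completion with a
    brief vibration or light flash (hypothetical device hooks)."""
    audio = synthesize_speech(reply_text)   # text-to-speech engine is not specified
    speaker.play(audio)                     # main control circuit drives loudspeaker 21
    speaker.wait_until_done()               # block until playback has finished
    if use_vibration:
        vibrate(duration_s=0.2)             # brief gentle vibration
    else:
        flash(duration_s=0.2)               # or a short light flash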

Claims (6)

1. A visualized hearing aid, comprising a sound collection component, a speech-text display component and a data link, the speech-text display component comprising a main control circuit, a loudspeaker, a display screen and processing software, the display interface of the display screen containing a first display area, a second display area and a third display area; characterized in that the sound collection component is a portable or wearable multi-microphone device which identifies the voice signals of surrounding people in the collected sound, separates and extracts each person's voice as an independent voice signal, computes the position of the source of each independent voice signal relative to the sound collection component, and transmits each extracted independent voice signal together with the position of its source relative to the sound collection component to the speech-text display component over the data link; the main control circuit receives each voice signal and the position of its source relative to the sound collection component; the processing software recognizes the text of each voice signal and displays the recognized text in the first display area, positioned proportionally according to the position and distance of its source relative to the sound collection component; the processing software selects one of the voice signals and shows the text recognized from it, highlighted, in the second display area; the processing software, based on the latest text in the second display area and its associated preceding context, provides one or more best-matching editable candidate replies shown in the third display area, and after the user selects, edits or directly enters a reply, the processing software synthesizes the reply into speech and plays it through the loudspeaker.
2. The visualized hearing aid according to claim 1, characterized in that when the speech-text display component receives only one voice signal and its position information, the processing software treats that voice signal as the preferred voice, and the processing software successively adds and highlights the text recognized from the preferred voice in the second display area in the order in which the main control circuit receives the preferred voice signal.
3. The visualized hearing aid according to claim 1, characterized in that when the speech-text display component receives more than one voice signal and their position information, the processing software selects as the preferred voice the voice whose source is closest to the front of the carrier or wearer of the sound collection component, whose source is closest to the region directly in front of the carrier or wearer, and whose sound intensity is highest, and the processing software shows the text recognized from the preferred voice, highlighted, in the second display area in the order in which the main control circuit receives the preferred voice signal.
4. The visualized hearing aid according to claim 1, characterized in that after the user selects any text item shown in the first display area, the processing software treats the voice corresponding to the selected text as the preferred voice, and the processing software successively highlights the text recognized from the preferred voice in the second display area in the order in which the main control circuit receives the preferred voice signal; further, the processing software keeps that voice as the preferred voice until the user reselects another voice as the new preferred voice.
5. The visualized hearing aid according to any one of claims 1 to 4, characterized in that the processing software sets the maximum single-turn dialogue duration based on the time the two parties have taken for their longest single exchange in the current conversation, dynamically adjusts the maximum single-turn dialogue duration as the conversation proceeds, and reselects the preferred voice after the sound collection component fails to collect the preferred voice within the maximum single-turn dialogue duration.
6. The visualized hearing aid according to any one of claims 1 to 5, characterized in that the processing software, based on each newest text in the second display area and its associated preceding context, provides and displays one or more best-matching candidate replies; after the user selects, edits or directly enters a reply, the processing software synthesizes the text of the reply finally determined by the user into speech and plays it through the loudspeaker; after playing the speech of the reply text, the main control circuit issues a brief gentle vibration or a light flash to indicate that the speech of the reply text has finished playing.
CN201811567610.4A 2018-12-25 2018-12-25 A kind of visualization hearing aid Pending CN109616122A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811567610.4A CN109616122A (en) 2018-12-25 2018-12-25 A kind of visualization hearing aid

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811567610.4A CN109616122A (en) 2018-12-25 2018-12-25 A kind of visualization hearing aid

Publications (1)

Publication Number Publication Date
CN109616122A 2019-04-12

Family

ID=66009866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811567610.4A Pending CN109616122A (en) 2018-12-25 2018-12-25 A kind of visualization hearing aid

Country Status (1)

Country Link
CN (1) CN109616122A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003223199A (en) * 2002-01-28 2003-08-08 Telecommunication Advancement Organization Of Japan Preparation support system for writing-up text for superimposed character and semiautomatic superimposed character program production system
CN201860365U (en) * 2010-05-26 2011-06-08 康佳集团股份有限公司 Mobile phone device for deaf-mute
CN105103457A (en) * 2013-03-28 2015-11-25 三星电子株式会社 Portable terminal, hearing aid, and method of indicating positions of sound sources in the portable terminal
CN105007557A (en) * 2014-04-16 2015-10-28 上海柏润工贸有限公司 Intelligent hearing aid with voice identification and subtitle display functions
CN104503572A (en) * 2014-12-16 2015-04-08 芜湖乐锐思信息咨询有限公司 Voice-text interaction and conversion device
CN105554662A (en) * 2015-06-30 2016-05-04 宇龙计算机通信科技(深圳)有限公司 Hearing-aid glasses and hearing-aid method
CN106686223A (en) * 2016-12-19 2017-05-17 中国科学院计算技术研究所 A system and method for assisting dialogues between a deaf person and a normal person, and a smart mobile phone
CN108962254A (en) * 2018-06-11 2018-12-07 北京佳珥医学科技有限公司 For assisting the methods, devices and systems and augmented reality glasses of hearing-impaired people
CN109032545A (en) * 2018-06-11 2018-12-18 北京佳珥医学科技有限公司 For providing the method and apparatus and augmented reality glasses of sound source information

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111358066A (en) * 2020-03-10 2020-07-03 中国人民解放军陆军军医大学第一附属医院 Protective clothing based on speech recognition
CN113825079A (en) * 2021-09-11 2021-12-21 武汉左点科技有限公司 Method and device for enhancing sound wave acceptance of hearing aid
CN114143591A (en) * 2021-11-26 2022-03-04 网易(杭州)网络有限公司 Subtitle display method, device, terminal and machine-readable storage medium
CN114615609A (en) * 2022-03-15 2022-06-10 深圳市昂思科技有限公司 Hearing aid control method, hearing aid device, apparatus, device and computer medium
CN114615609B (en) * 2022-03-15 2024-01-30 深圳市昂思科技有限公司 Hearing aid control method, hearing aid device, apparatus, device and computer medium

Similar Documents

Publication Publication Date Title
CN109616122A (en) A kind of visualization hearing aid
US20220159403A1 (en) System and method for assisting selective hearing
US9769296B2 (en) Techniques for voice controlling bluetooth headset
US9949056B2 (en) Method and apparatus for presenting to a user of a wearable apparatus additional information related to an audio scene
US6882971B2 (en) Method and apparatus for improving listener differentiation of talkers during a conference call
CN108156550B (en) Playing method and device of headset
CN108762494B (en) Method, device and storage medium for displaying information
US20170243582A1 (en) Hearing assistance with automated speech transcription
WO2021136962A1 (en) Hearing aid systems and methods
JP2016051081A (en) Device and method of sound source separation
US20180054688A1 (en) Personal Audio Lifestyle Analytics and Behavior Modification Feedback
US20230164509A1 (en) System and method for headphone equalization and room adjustment for binaural playback in augmented reality
CN111741394A (en) Data processing method and device and readable medium
JP2020095121A (en) Speech recognition system, generation method for learned model, control method for speech recognition system, program, and moving body
CN104851423B (en) Sound information processing method and device
US20240135951A1 (en) Mapping sound sources in a user interface
CN107767862B (en) Voice data processing method, system and storage medium
CN110176231B (en) Sound output system, sound output method, and storage medium
KR20140093459A (en) Method for automatic speech translation
EP3288035B1 (en) Personal audio analytics and behavior modification feedback
CN112673423A (en) In-vehicle voice interaction method and equipment
CN111988705A (en) Audio processing method, device, terminal and storage medium
CN113066513B (en) Voice data processing method and device, electronic equipment and storage medium
US20220172711A1 (en) System with speaker representation, electronic device and related methods
CN112331179A (en) Data processing method and earphone accommodating device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190412